
    Temporal shape super-resolution by intra-frame motion encoding using high-fps structured light

    One solution for depth imaging of a moving scene is to project a static pattern onto the object and reconstruct from just a single image. However, if the object moves too fast relative to the exposure time of the image sensor, the patterns in the captured image are blurred and reconstruction fails. In this paper, we encode multiple projection patterns into each single captured image to achieve temporal super-resolution of depth image sequences. With our method, multiple patterns are projected onto the object at a higher fps than the camera can capture. The observed pattern then varies with the depth and motion of the object, so temporal information about the scene can be extracted from each single image. The decoding process is realized with a learning-based approach that requires no geometric calibration. Experiments confirm the effectiveness of our method: sequential shapes are reconstructed from a single image. Quantitative evaluations and comparisons with recent techniques are also presented. Comment: 9 pages, published at the International Conference on Computer Vision (ICCV 2017).
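The key idea above, that one camera exposure integrates several high-fps projector patterns, and that object motion displaces the later patterns so the blend encodes temporal information, can be sketched as follows. This is a minimal 1-D toy model, not the paper's method: the stripe patterns, the per-frame pixel shifts, and the averaging exposure model are all illustrative assumptions (the paper decodes the blend with a learning-based approach).

```python
import numpy as np

def simulate_exposure(patterns, shifts):
    """Integrate several projected patterns into one camera exposure.

    patterns: list of 1-D projector patterns, one per projector frame
    shifts:   per-frame horizontal shift (pixels) induced by the scene's
              depth/motion at a given point (hypothetical toy model)
    """
    acc = np.zeros_like(patterns[0], dtype=float)
    for pat, s in zip(patterns, shifts):
        acc += np.roll(pat, s)      # pattern displaced by depth/motion
    return acc / len(patterns)      # normalized single captured image

# Two stripe patterns projected within one exposure; a moving surface
# shifts the second pattern, so the blended image encodes the motion.
p1 = np.tile([1.0, 0.0], 8)
p2 = np.tile([1.0, 1.0, 0.0, 0.0], 4)
static = simulate_exposure([p1, p2], [0, 0])
moving = simulate_exposure([p1, p2], [0, 2])
print(not np.allclose(static, moving))  # → True: the blend differs
```

Because the two captured blends differ, a decoder (learned, in the paper) can in principle recover the per-frame shifts, i.e. the temporal evolution of the scene, from one image.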

    Krill-eye: Superposition Compound Eye for Wide-Angle Imaging via GRIN Lenses

    We propose a novel wide-angle imaging system inspired by the compound eyes of animals. Instead of a single lens well compensated for aberration, we use a number of simple lenses to form a compound eye that produces practically distortion-free, uniform images across the angular range. The images formed by the multiple lenses are superposed on a single surface for increased light efficiency. We use GRIN (gradient refractive index) lenses to create sharply focused images without the artifacts seen in reflection-based methods for X-ray astronomy. We show the theoretical constraints for forming a blur-free image on the image sensor and derive a continuum between 1:1 flat optics for document scanners and curved sensors focused at infinity. Finally, we show a practical application of the proposed optics in a beacon that measures the relative rotation angle between the light source and the camera while carrying ID information.
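The superposition principle described above, where many lenslets form registered copies of the same image that are summed on one surface for higher light efficiency, can be illustrated with a shift-and-add sketch. This is a toy registration model under assumed, known per-lenslet offsets; it does not reproduce the paper's GRIN-lens optics or its blur-free imaging constraints.

```python
import numpy as np

def superpose(lenslet_images, offsets):
    """Shift-and-add superposition of lenslet sub-images onto one surface.

    Each lenslet forms the same (ideally distortion-free) image of the
    scene at a known offset; registering and summing the copies boosts
    light efficiency, as in a superposition compound eye. The offsets
    are a hypothetical calibration, not derived from GRIN-lens optics.
    """
    h, w = lenslet_images[0].shape
    acc = np.zeros((h, w))
    for img, (dy, dx) in zip(lenslet_images, offsets):
        acc += np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return acc

# Three copies of a point-source scene, each seen at a known shift.
scene = np.zeros((8, 8))
scene[3, 3] = 1.0
views = [np.roll(np.roll(scene, dy, axis=0), dx, axis=1)
         for dy, dx in [(0, 0), (1, 2), (2, 1)]]
out = superpose(views, [(0, 0), (1, 2), (2, 1)])
print(out[3, 3])  # → 3.0: the superposed signal is three times brighter
```

The summed image collects three times the light of any single lenslet, which is the efficiency advantage of superposition over apposition compound-eye designs.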

    Color Photometric Stereo Using Multi-Band Camera Constrained by Median Filter and Occluding Boundary

    One of the main problems with the photometric stereo method is that it requires several measurements, since it needs illumination from light sources in different directions. A solution to this problem is the color photometric stereo method, which performs one-shot measurement by simultaneously illuminating the object with lights of different wavelengths. However, the classic color photometric stereo method can only measure white objects; estimating the surface normal of a multicolored object with this method alone is theoretically impossible. Constraints must therefore be added to estimate the surface normal of a multicolored object within the color photometric stereo framework. In this study, a median filter is employed as the constraint on albedo, and the surface normal at the occluding boundary is employed as the constraint on the surface normal. The median filter yields smooth distributions of albedo and normal while preserving sharp features at boundaries between different albedos and normals. The surface normal at the occluding boundary is propagated into the interior of the object region and forms a rough shape of the object, which serves as a strong initial guess for the surface normal. To demonstrate the effectiveness of this approach, a measurement device realizing a multispectral photometric stereo method with seven colors is employed instead of the classic three-color color photometric stereo method.
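The baseline that the abstract builds on, classic Lambertian photometric stereo for a white surface, where three lights (or three color bands) suffice to recover albedo and normal per pixel, can be sketched as follows. The light directions and the single test pixel are illustrative assumptions; the study's actual contributions (the median-filter albedo constraint, occluding-boundary propagation, and the seven-band device) are only noted in comments.

```python
import numpy as np

def photometric_stereo(I, L):
    """Classic Lambertian photometric stereo for a white surface.

    I: (3, P) intensities, one row per light (or per color band)
    L: (3, 3) unit light directions, one row per light
    Solves I = L @ (albedo * n) pixel-wise. For multicolored surfaces
    this is underdetermined; the abstract adds constraints (a median
    filter on albedo, occluding-boundary normals) not sketched here.
    """
    G = np.linalg.solve(L, I)           # G = albedo * normal, per pixel
    albedo = np.linalg.norm(G, axis=0)  # surface reflectance
    normals = G / np.maximum(albedo, 1e-12)
    return albedo, normals

# Three unit light directions (hypothetical rig, not the paper's).
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([[0.0], [0.0], [1.0]])    # one flat, white pixel
I = 0.8 * (L @ n_true)                      # albedo 0.8, Lambertian
albedo, n = photometric_stereo(I, L)
print(round(float(albedo[0]), 3), n[:, 0])  # recovers 0.8 and (0, 0, 1)
```

With colored light sources, the three rows of `I` come from one shot instead of three, which is exactly the one-shot advantage the color variant exploits; the extra constraints in the study are what make the same system solvable when the albedo varies across the surface.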